In-buffer batching with perf buffers for NPM #31402
base: main
Conversation
Force-pushed from 2f1f991 to 99dae30
Go Package Import Differences (baseline: 92348d9)
Test changes on VM
Use this command from test-infra-definitions to manually test this PR's changes on a VM: `inv aws.create-vm --pipeline-id=51732718 --os-family=ubuntu`
Note: This applies to commit 575cbf0
Regression Detector Results
Metrics dashboard. Baseline: 92348d9
Optimization Goals: ✅ No significant changes detected
perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
---|---|---|---|---|---|---|
➖ | quality_gate_logs | % cpu utilization | +4.20 | [+0.94, +7.47] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | +0.18 | [-0.69, +1.05] | 1 | Logs |
➖ | quality_gate_idle_all_features | memory utilization | +0.12 | [+0.04, +0.21] | 1 | Logs bounds checks dashboard |
➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.05 | [-0.72, +0.83] | 1 | Logs |
➖ | quality_gate_idle | memory utilization | +0.01 | [-0.03, +0.05] | 1 | Logs bounds checks dashboard |
➖ | file_to_blackhole_500ms_latency | egress throughput | +0.01 | [-0.76, +0.78] | 1 | Logs |
➖ | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.12, +0.12] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency | egress throughput | +0.00 | [-0.84, +0.84] | 1 | Logs |
➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.01] | 1 | Logs |
➖ | otel_to_otel_logs | ingress throughput | -0.01 | [-0.67, +0.66] | 1 | Logs |
➖ | file_to_blackhole_100ms_latency | egress throughput | -0.01 | [-0.69, +0.67] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | -0.04 | [-0.88, +0.80] | 1 | Logs |
➖ | file_to_blackhole_300ms_latency | egress throughput | -0.07 | [-0.71, +0.57] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.16 | [-0.63, +0.31] | 1 | Logs |
➖ | file_tree | memory utilization | -0.49 | [-0.61, -0.36] | 1 | Logs |
➖ | tcp_syslog_to_blackhole | ingress throughput | -0.70 | [-0.79, -0.61] | 1 | Logs |
➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -2.75 | [-3.42, -2.08] | 1 | Logs |
Bounds Checks: ❌ Failed
perf | experiment | bounds_check_name | replicates_passed | links |
---|---|---|---|---|
❌ | file_to_blackhole_500ms_latency | lost_bytes | 9/10 | |
✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | lost_bytes | 10/10 | |
✅ | quality_gate_logs | memory_usage | 10/10 | |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true (a worked example follows this list):
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
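For example, applying these criteria to the quality_gate_logs row above: the estimated Δ mean % is +4.20 with a CI of [+0.94, +7.47]. The interval excludes zero, but |+4.20| is below the 5.00% effect-size tolerance, so the change is reported as ➖ (no significant change) rather than flagged as a regression.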
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
The PR changes the classification code base, which is owned by USM.
Blocking the PR to ensure we can review it and verify there's no concern on our side.
A side note: this is another large PR. Please try to split it into smaller pieces.
@guyarb do we need to update CODEOWNERS to reflect this?
```diff
@@ -36,7 +36,7 @@ BPF_PERF_EVENT_ARRAY_MAP(conn_close_event, __u32)
  * or BPF_MAP_TYPE_PERCPU_ARRAY, but they are not available in
  * some of the Kernels we support (4.4 ~ 4.6)
  */
-BPF_HASH_MAP(conn_close_batch, __u32, batch_t, 1024)
+BPF_HASH_MAP(conn_close_batch, __u32, batch_t, 1)
```
nit: I think typically we set map sizes to 0 when we intend to overwrite them in userspace
That is for maps that must be resized. This can remain at 1 if it is not being used, but it must be included because the code references it.
makes sense 👍
I think setting this to 0 is still a good safeguard. We can set it to 1 at load time, or ideally remove it from the map spec if it's not required.
I didn't know we could remove maps from the spec at load time? If so, it's likely a trivial difference in memory footprint, but a good pattern nonetheless.
> I think setting this to 0 is still a good safeguard

Safeguard against what? The default configuration all matches at the moment. Changing this to 0 means that you must resize the map, even when using the default value for whether or not to do the custom batching.

> ideally remove this from the map spec if not required, at load time

I don't think we can completely remove the map spec. This is because there is still code that references that map, even though it is protected by a branch that will never get taken.
> Safeguard against what?

Against loading the map with max entries set to 1 because the userspace forgot to resize it. This may happen during a refactor, when someone moves the code around. Having a default value of 0 forces the userspace to think about the correct value under all conditions.
max entries set to 1 is the desired value when custom batching is disabled.
If you forgot to resize, then the batch manager will fail loudly when it is trying to set up the default map values.
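For context, the "resize in userspace" pattern being discussed might look like the following minimal sketch using cilium/ebpf; the object path, sizing, and loading flow are illustrative, not the agent's actual loader:

```go
package npmbatch

import "github.com/cilium/ebpf"

// loadConnCloseMaps overrides the placeholder max_entries compiled into the
// object file before the map is created in the kernel. With a spec default
// of 0 this step would be mandatory; with a default of 1 the map still
// loads when custom batching is disabled.
func loadConnCloseMaps(objPath string, customBatching bool) (*ebpf.Collection, error) {
	spec, err := ebpf.LoadCollectionSpec(objPath)
	if err != nil {
		return nil, err
	}
	if customBatching {
		// Illustrative sizing; the real value comes from the batch manager.
		spec.Maps["conn_close_batch"].MaxEntries = 1024
	}
	return ebpf.NewCollection(spec)
}
```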
eBPF complexity changes
Summary result: 🎉 - improved
tracer details: tracer [programs with changes]; tracer [programs without changes]
tracer_fentry details: tracer_fentry [programs with changes]; tracer_fentry [programs without changes]
This report was generated based on the complexity data for the current branch bryce.kahle/perf-buffer-npm-only (pipeline 51732718, commit 575cbf0) and the base branch main (commit 92348d9). Objects without changes are not reported. Contact #ebpf-platform if you have any questions/feedback. Table complexity legend: 🔵 - new; ⚪ - unchanged; 🟢 - reduced; 🔴 - increased
pkg/config/setup/system_probe.go (Outdated)
```diff
@@ -194,6 +194,7 @@ func InitSystemProbeConfig(cfg pkgconfigmodel.Config) {
 	cfg.BindEnv(join(netNS, "max_failed_connections_buffered"))
 	cfg.BindEnvAndSetDefault(join(spNS, "closed_connection_flush_threshold"), 0)
 	cfg.BindEnvAndSetDefault(join(spNS, "closed_channel_size"), 500)
+	cfg.BindEnvAndSetDefault(join(netNS, "closed_buffer_wakeup_count"), 5)
```
What is the plan to migrate other perf-buffers to this technique?
Do we plan to create a different configuration per perf-buffer?
Maybe we should have a single configuration for all perf-buffers, and allow different teams to create a dedicated configuration to override it:

```go
cfg.BindEnvAndSetDefault(join(spNS, "common_wakeup_count"), 5)
cfg.BindEnv(join(netNS, "closed_buffer_wakeup_count"))
```

and in adjust_npm.go:

```go
applyDefault(cfg, netNS("closed_buffer_wakeup_count"), cfg.GetInt(spNS("common_wakeup_count")))
```
> What is the plan to migrate other perf-buffers to this technique?

I was keeping each team to a separate PR. I wanted to consult first to ensure it was actually a feature they wanted.

> Do we plan to create a different configuration per perf-buffer?

Yes, because how much you want to keep in the buffer before wakeup is a use-case-specific value.
Should this be specified in terms of bytes, or maybe percentages, so the code can calculate the appropriate count based on the size of the records?
For example, if we want a flush to happen when the perf buffer is at 25% capacity, then this config value can specify that (either as a percentage or in bytes), and the code can calculate the appropriate count based on the size of the perf buffer and the record items.
That sounds like something that could be addressed in a future PR by NPM folks. That is additional complexity that I don't think is necessary for this PR, which is trying to closely match the behavior of the existing custom batching.
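As a rough illustration of the percentage idea (all names hypothetical and not part of this PR), the conversion from a fill percentage to a wakeup watermark is simple arithmetic:

```go
// wakeupWatermarkBytes converts a target fill percentage of a per-CPU perf
// buffer into the byte watermark at which the kernel wakes the reader.
// Hypothetical helper, for illustration only.
func wakeupWatermarkBytes(bufferSizeBytes int, fillPercent float64) int {
	wm := int(float64(bufferSizeBytes) * fillPercent / 100.0)
	if wm < 1 {
		wm = 1 // a zero watermark would be meaningless
	}
	return wm
}
```

For example, a 64 KiB per-CPU buffer flushed at 25% capacity gives wakeupWatermarkBytes(65536, 25) == 16384, i.e., the reader is woken after roughly 16 KiB of records.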
pkg/ebpf/perf/event.go (Outdated)
```go
// If ring buffers are requested and the kernel supports them, switch the
// handler over to ring-buffer mode before the maps are loaded.
if e.opts.UseRingBuffer && features.HaveMapType(ebpf.RingBuf) == nil {
	if e.opts.UpgradePerfBuffer {
		// The object file declares a perf event array; upgrade it so the
		// map loads as a ring buffer instead.
		if ms.Type != ebpf.PerfEventArray {
			return fmt.Errorf("map %q is not a perf buffer, got %q instead", e.opts.MapName, ms.Type.String())
		}
		UpgradePerfBuffer(mgr, mgrOpts, e.opts.MapName)
	} else if ms.Type != ebpf.RingBuf {
		// No upgrade requested, so the map must already be a ring buffer.
		return fmt.Errorf("map %q is not a ring buffer, got %q instead", e.opts.MapName, ms.Type.String())
	}

	// Ensure the ring buffer matches the configured size.
	if ms.MaxEntries != uint32(e.opts.RingBufOptions.BufferSize) {
		ResizeRingBuffer(mgrOpts, e.opts.MapName, e.opts.RingBufOptions.BufferSize)
	}
	e.initRingBuffer(mgr)
	return nil
}
```
please document whatever is going on here. It is hard to follow
I tweaked the code a bit. Let me know if it is easier to follow now.
I actually added a few minor comments now
I don't see any comment
it got refactored, so it should be much more readable now. If there are still things that are confusing, please give me specifics so I can add comments for those.
The original comment is still valid; the function is hard to read. Please add documentation so it's clear what's going on here.
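For readers following along, the flow in the snippet above (as reconstructed here) appears to be: when ring buffers are requested and the kernel supports them, either upgrade the declared perf event array to a ring buffer via UpgradePerfBuffer or verify the map is already a ring buffer, resize the ring buffer if the spec's max entries differ from the configured buffer size, and then initialize ring-buffer reading; otherwise execution presumably continues to the perf-buffer setup after this block.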
Force-pushed from 0b82725 to 0f1ed30
@guyarb Can you review again? This is ready to go.
Uncompressed package size comparison
Comparison with ancestor; diff per package.
Decision
pkg/ebpf/perf/event.go (Outdated)
```go
func updateMaxTelemetry(a *atomic.Uint64, val uint64) {
	for {
		oldVal := a.Load()
		if val <= oldVal {
			return
		}
		if a.CompareAndSwap(oldVal, val) {
			return
		}
	}
}
```
please document the function and why you're doing it in that way
What do you mean by "that way"? This is how you would atomically update a max value.
then please document
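For reference, this is the standard lock-free pattern for maintaining a running maximum: load the current value, return early if the candidate is not larger, and otherwise compare-and-swap, retrying only when another goroutine changed the value between the load and the swap. A usage sketch, assuming the quoted function is in scope (names here are hypothetical):

```go
// Package-level gauge tracking the largest batch observed (illustrative).
var maxBatchLen atomic.Uint64

// recordBatchLen may be called concurrently from reader goroutines;
// the stored value only ever grows.
func recordBatchLen(n int) {
	updateMaxTelemetry(&maxBatchLen, uint64(n))
}
```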
/merge
Devflow running:
What does this PR do?
- `network_config.enable_custom_batching` for NPM. This enables the status quo custom batching.

Important: `network_config.enable_custom_batching` is false by default, which means it must be enabled to restore the previous behavior.

Motivation
Describe how to test/QA your changes
Possible Drawbacks / Trade-offs
Note: The current (and unchanged) configuration defaults to using ring buffers, if available.
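For reference, "if available" is a runtime kernel check; a minimal sketch using cilium/ebpf's features package, mirroring the features.HaveMapType call in the event.go snippet above:

```go
package main

import (
	"fmt"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/features"
)

func main() {
	// HaveMapType returns nil when the running kernel supports the map type,
	// so a nil error means BPF ring buffers (kernel 5.8+) can be used.
	if features.HaveMapType(ebpf.RingBuf) == nil {
		fmt.Println("ring buffers available")
	} else {
		fmt.Println("falling back to perf buffers")
	}
}
```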
Additional Notes
EBPF-481